Mosquito-borne diseases (MBDs), such as dengue virus, chikungunya virus, and West Nile virus, cause over one million deaths globally every year. Because many of these diseases are spread by Aedes and Culex mosquitoes, tracking these larvae is critical to mitigating the spread of MBDs. Even as citizen science grows and yields larger mosquito image datasets, manual annotation of mosquito images becomes ever more time-consuming and inefficient. Previous research has used computer vision to identify mosquito species, and convolutional neural networks (CNNs) have become the de facto standard for image classification. However, these models typically demand substantial computational resources. This study introduces the application of Vision Transformers (ViTs) in a comparative study aimed at improving image classification of Aedes and Culex larvae. Two ViT models, ViT-Base and CvT-13, and two CNN models, ResNet-18 and ConvNeXT, were trained on mosquito larvae image data and compared to determine the most effective model for distinguishing larvae as Aedes or Culex. Testing showed that ConvNeXT achieved the highest values across all classification metrics, demonstrating its viability for mosquito larvae classification. Building on these results, future work includes creating a model designed specifically for mosquito larvae classification by combining elements of CNN and transformer architectures.
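As a rough illustration of the comparison above, the sketch below fine-tunes interchangeable backbones for binary Aedes/Culex classification. It uses torchvision stand-ins (ViT-Base, ResNet-18, ConvNeXt-Tiny); CvT-13 is not in torchvision and would come from another library, and the hyperparameters are illustrative assumptions rather than the study's actual setup.

```python
# Minimal sketch: swap backbones behind one interface and fine-tune each with
# the same loop, then compare held-out accuracy / precision / recall / F1.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # Aedes vs. Culex

def build_model(name: str) -> nn.Module:
    if name == "vit_base":
        m = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
        m.heads.head = nn.Linear(m.heads.head.in_features, NUM_CLASSES)
    elif name == "resnet18":
        m = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "convnext":
        m = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
        m.classifier[2] = nn.Linear(m.classifier[2].in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

model = build_model("convnext")
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # illustrative lr
```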
Recent work leverages the expressive power of generative adversarial networks (GANs) to generate labeled synthetic datasets. These dataset generation methods often require new annotations of synthetic images, which forces practitioners to seek out annotators, curate a set of synthetic images, and ensure the quality of generated labels. We introduce the HandsOff framework, a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than 50 pre-existing labeled images. Our framework avoids the practical drawbacks of prior work by unifying the field of GAN inversion with dataset generation. We generate datasets with rich pixel-wise labels in multiple challenging domains such as faces, cars, full-body human poses, and urban driving scenes. Our method achieves state-of-the-art performance in semantic segmentation, keypoint detection, and depth estimation compared to prior dataset generation approaches and transfer learning baselines. We additionally showcase its ability to address broad challenges in model development which stem from fixed, hand-annotated datasets, such as the long-tail problem in semantic segmentation.
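As a conceptual sketch of this style of dataset generation (not the HandsOff implementation itself), the snippet below trains a small pixel-wise label head on per-pixel generator features: the few labeled images are inverted into latents, their features are read out, and the head is fit against the existing annotations; afterwards any sampled latent yields an (image, label) pair. The `extract_pixel_features` helper is a hypothetical stand-in for pulling and upsampling intermediate StyleGAN feature maps.

```python
# Conceptual sketch of GAN-feature-based label generation; the generator and
# feature extractor are hypothetical placeholders, not HandsOff's actual API.
import torch
import torch.nn as nn

FEAT_DIM, NUM_CLASSES = 512, 19  # illustrative assumptions

label_head = nn.Sequential(
    nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES)
)

def label_from_latent(generator, extract_pixel_features, w):
    """Generate an image and its pixel-wise label map from a latent code w."""
    image = generator(w)                          # (3, H, W)
    feats = extract_pixel_features(generator, w)  # (H*W, FEAT_DIM)
    logits = label_head(feats)                    # (H*W, NUM_CLASSES)
    return image, logits.argmax(dim=-1)

# Training: invert the <50 labeled images into latents (GAN inversion), read
# out their per-pixel features, and fit label_head with cross-entropy against
# the existing annotations; any sampled w then yields an (image, label) pair.
```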
Detecting actions in untrimmed videos should not be limited to a small, closed set of classes. We present a simple, yet effective strategy for open-vocabulary temporal action detection utilizing pretrained image-text co-embeddings. Despite being trained on static images rather than videos, we show that image-text co-embeddings enable open-vocabulary performance competitive with fully-supervised models. We show that the performance can be further improved by ensembling the image-text features with features encoding local motion, like optical flow based features, or other modalities, like audio. In addition, we propose a more reasonable open-vocabulary evaluation setting for the ActivityNet dataset, where the category splits are based on similarity rather than random assignment.
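A minimal sketch of the core idea, under the assumption that a CLIP-style model stands in for the image-text co-embedding: frames sampled from a temporal segment are embedded, pooled over time, and scored against free-form action names. The prompt template and mean pooling are illustrative choices.

```python
# Score a video segment against an open vocabulary of action names using a
# pretrained image-text co-embedding (CLIP as a representative model).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_segment(frames, class_names):
    """frames: list of PIL images sampled from one temporal segment."""
    prompts = [f"a video of a person {c}" for c in class_names]
    inputs = processor(text=prompts, images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Pool frame embeddings over time, then compare with text embeddings.
    img = out.image_embeds.mean(dim=0, keepdim=True)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).softmax(dim=-1)  # per-class scores for the segment
```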
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric. Observing a strong correlation between the ground truth search metric and self-supervised losses, we introduce self-supervised AutoFlow to handle real-world videos without ground truth labels. Using self-supervised loss as the search metric, our self-supervised AutoFlow performs on par with AutoFlow on Sintel and KITTI where ground truth is available, and performs better on the real-world DAVIS dataset. We further explore using self-supervised AutoFlow in the (semi-)supervised setting and obtain competitive results against the state of the art.
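To make the self-supervised search metric concrete, here is a minimal photometric loss of the kind commonly used for optical flow: the second frame is warped toward the first with the predicted flow and the photometric difference is penalized. This is a generic sketch rather than the paper's exact loss; occlusion handling and smoothness terms, which matter in practice, are omitted.

```python
# Generic self-supervised photometric loss: warp img2 back to img1 using the
# predicted flow, then penalize the per-pixel difference.
import torch
import torch.nn.functional as F

def photometric_loss(img1, img2, flow):
    """img1, img2: (B, 3, H, W); flow: (B, 2, H, W), mapping img1 -> img2."""
    B, _, H, W = img1.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    # Target sampling coordinates, normalized to [-1, 1] for grid_sample.
    gx = 2.0 * (xs + flow[:, 0]) / (W - 1) - 1.0
    gy = 2.0 * (ys + flow[:, 1]) / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)  # (B, H, W, 2)
    warped = F.grid_sample(img2, grid, align_corners=True)
    return (img1 - warped).abs().mean()
```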
The process of screening molecules for desirable properties is a key step in several applications, ranging from drug discovery to material design. During the process of drug discovery specifically, protein-ligand docking, or chemical docking, is a standard in-silico scoring technique that estimates the binding affinity of molecules with a specific protein target. Recently, however, as the number of virtual molecules available to test has rapidly grown, these classical docking algorithms have created a significant computational bottleneck. We address this problem by introducing Deep Surrogate Docking (DSD), a framework that applies deep learning-based surrogate modeling to accelerate the docking process substantially. DSD can be interpreted as a formalism of several earlier surrogate prefiltering techniques, adding novel metrics and practical training techniques. Specifically, we show that graph neural networks (GNNs) can serve as fast and accurate estimators of classical docking algorithms. Additionally, we introduce FiLMv2, a novel GNN architecture which we show outperforms existing state-of-the-art GNN architectures, attaining more accurate and stable performance by allowing the model to filter out irrelevant information from data more efficiently. Through extensive experimentation and analysis, we show that the DSD workflow combined with the FiLMv2 architecture provides a 9.496x speedup in molecule screening with a <3% recall error rate on an example docking task. Our open-source code is available at https://github.com/ryienh/graph-dock.
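The prefiltering workflow can be sketched as follows, with PyTorch Geometric's FiLM convolution as a stand-in for FiLMv2 (the actual architecture lives in the repository linked above): the surrogate regresses docking scores, and only the most promising fraction of the virtual library is passed to expensive classical docking. Dimensions and the keep fraction are illustrative.

```python
# Surrogate prefiltering sketch: a fast GNN regressor predicts docking scores
# so that only top candidates reach the classical docking stage.
import torch
import torch.nn as nn
from torch_geometric.nn import FiLMConv, global_mean_pool

class SurrogateGNN(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.conv1 = FiLMConv(in_dim, hidden)
        self.conv2 = FiLMConv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)  # predicted docking score

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.head(global_mean_pool(h, batch)).squeeze(-1)

def prefilter(model, loader, keep_fraction=0.1):
    """Keep only the top-scoring fraction for classical docking."""
    with torch.no_grad():
        scores = torch.cat([model(b.x, b.edge_index, b.batch) for b in loader])
    k = int(keep_fraction * len(scores))
    return scores.topk(k, largest=False).indices  # lower score = stronger binding
```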
Molecular shape and geometry dictate key biophysical recognition processes, yet many graph neural networks disregard 3D information for molecular property prediction. Here, we propose a new contrastive-learning procedure for graph neural networks, Molecular Contrastive Learning from Shape Similarity (MolCLaSS), that implicitly learns a three-dimensional representation. Rather than directly encoding or targeting three-dimensional poses, MolCLaSS matches a similarity objective based on Gaussian overlays to learn a meaningful representation of molecular shape. We demonstrate how this framework naturally captures key aspects of three-dimensionality that two-dimensional representations cannot and provides an inductive framework for scaffold hopping.
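One way to read the similarity-matching objective, sketched below under illustrative assumptions: a 2D graph encoder produces embeddings whose pairwise similarities are regressed onto precomputed Gaussian-overlay shape similarities between 3D conformers. The encoder interface and the use of a plain MSE loss are assumptions, not the paper's exact formulation.

```python
# Sketch of a shape-similarity matching objective: embeddings of 2D molecular
# graphs are trained so their pairwise similarities match 3D overlay scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

def shape_matching_loss(encoder: nn.Module, graphs, shape_sim: torch.Tensor):
    """graphs: a batch of 2D molecular graphs (any encoder returning (B, D));
    shape_sim: (B, B) precomputed Gaussian-overlay similarities between the
    molecules' 3D conformers."""
    z = F.normalize(encoder(graphs), dim=-1)  # (B, D) graph embeddings
    pred_sim = z @ z.T                        # predicted pairwise similarity
    return F.mse_loss(pred_sim, shape_sim)    # match the shape-similarity target
```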
Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions. However, given the scarcity of human instruction data and limited diversity in the training environments, these agents still struggle with complex language grounding and spatial language understanding. Pretraining on large text and image-text datasets from the web has been extensively explored but the improvements are limited. We investigate large-scale augmentation with synthetic instructions. We take 500+ indoor environments captured in densely-sampled 360-degree panoramas, construct navigation trajectories through these panoramas, and generate a visually-grounded instruction for each trajectory using Marky, a high-quality multilingual navigation instruction generator. We also synthesize image observations from novel viewpoints using an image-to-image GAN. The resulting dataset of 4.2M instruction-trajectory pairs is two orders of magnitude larger than existing human-annotated datasets, and contains a wider variety of environments and viewpoints. To efficiently leverage data at this scale, we train a simple transformer agent with imitation learning. On the challenging RxR dataset, our approach outperforms all existing RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in unseen test environments. Our work points to a new path to improving instruction-following agents, emphasizing large-scale imitation learning and the development of synthetic instruction generation capabilities.
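The imitation learning step can be sketched as plain behavior cloning on the synthetic instruction-trajectory pairs; the agent interface and batch fields below are illustrative assumptions, not the paper's actual API.

```python
# Behavior-cloning sketch: the transformer agent predicts the expert action at
# each trajectory step given the instruction and panoramic observations.
import torch
import torch.nn as nn

def train_step(agent: nn.Module, optimizer, batch):
    """batch: tokenized instruction, panorama features, and expert actions."""
    logits = agent(batch["instruction"], batch["observations"])  # (B, T, A)
    loss = nn.functional.cross_entropy(
        logits.flatten(0, 1), batch["expert_actions"].flatten()
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```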
To become effective partners for humans, robots must grow increasingly comfortable making contact with their environment. Unfortunately, it is hard for robots to distinguish "enough" force from "too much": some force is needed to complete a task, but too much can damage equipment or injure humans. Traditional approaches to designing compliant feedback controllers (e.g., stiffness control) require hand-tuning of control parameters, making it difficult to build safe, effective robot collaborators. In this paper, we propose a novel and easy-to-implement force feedback controller that uses control barrier functions (CBFs) to derive a compliant controller directly from user specifications of the maximum allowable forces and torques. We compare against traditional stiffness control approaches to demonstrate the potential advantages of our control architecture, and we demonstrate the effectiveness of our controller on a human-robot collaboration task: cooperative manipulation of a bulky object.
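To make the CBF idea concrete, here is a minimal scalar sketch under a simplifying assumption that is not the paper's model: the measured contact force F tracks the commanded force u with first-order dynamics F_dot = k*(u - F). With the barrier h = F_max - F, enforcing h_dot + alpha*h >= 0 yields a closed-form upper bound on u, so the filter minimally clips the nominal command.

```python
# CBF-style force filter under an assumed first-order force-tracking model.
def cbf_force_filter(u_nominal, f_measured, f_max, k=10.0, alpha=5.0):
    """Return the closest command to u_nominal that keeps h = f_max - F >= 0.

    h_dot + alpha*h >= 0  =>  -k*(u - F) + alpha*(f_max - F) >= 0
                          =>  u <= F + (alpha / k) * (f_max - F)
    """
    u_bound = f_measured + (alpha / k) * (f_max - f_measured)
    return min(u_nominal, u_bound)

# Example: with f_max = 20 N and a measured force of 18 N, a nominal command
# of 30 N is clipped to 18 + 0.5 * 2 = 19 N, keeping the contact force safe.
print(cbf_force_filter(30.0, 18.0, 20.0))  # -> 19.0
```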
Large language models (LLMs) have unlocked new capabilities of task planning from human instructions. However, prior attempts to apply LLMs to real-world robotic tasks are limited by the lack of grounding in the surrounding scene. In this paper, we develop NLMap, an open-vocabulary and queryable scene representation, to address this problem. NLMap is a framework for gathering contextual information into LLM planners, allowing them to see and query the available objects in the scene before generating a context-conditioned plan. NLMap first establishes a natural-language-queryable scene representation with a Vision-and-Language Model (VLM). An LLM-based object proposal module then parses instructions and proposes the involved objects, which are used to query the scene representation for object availability and location. The LLM planner then plans with this information about the scene. NLMap allows robots to operate without a fixed list of objects or executable options, enabling real-robot operation that was unachievable by previous methods. Project website: https://nlmap-saycan.github.io
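The pipeline can be summarized in a conceptual sketch; every function below (`build_scene_map`, `propose_objects`, `plan`) is a hypothetical placeholder for the corresponding component, not NLMap's actual API.

```python
# Conceptual sketch of an NLMap-style grounded planning loop.
def contextual_plan(instruction, rgb_frames, vlm, llm):
    # 1. Build a natural-language-queryable scene representation with a VLM.
    scene_map = build_scene_map(rgb_frames, vlm)

    # 2. LLM-based object proposal: parse the instruction into object queries.
    candidates = propose_objects(instruction, llm)  # e.g. ["sponge", "cup"]

    # 3. Query the scene map for availability and location of each object.
    found = {name: scene_map.query(name) for name in candidates}
    available = {n: loc for n, loc in found.items() if loc is not None}

    # 4. Plan conditioned on what is actually present in the scene.
    return plan(instruction, available, llm)
```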